
    The emergence of synaesthesia in a Neuronal Network Model via changes in perceptual sensitivity and plasticity

    Synaesthesia is an unusual perceptual experience in which an inducer stimulus triggers a percept in a different domain in addition to its own. To explore the conditions under which synaesthesia evolves, we studied a neuronal network model that represents two recurrently connected neural systems. The interactions in the network evolve according to learning rules that optimise sensory sensitivity. We demonstrate several scenarios, such as sensory deprivation or heightened plasticity, under which synaesthesia can evolve even though the inputs to the two systems are statistically independent and the initial cross-talk interactions are zero. Sensory deprivation is the known causal mechanism for acquired synaesthesia, and increased plasticity is implicated in developmental synaesthesia. The model unifies different causes of synaesthesia within a single theoretical framework and repositions synaesthesia not as a quirk of aberrant connectivity, but rather as a functional brain state that can emerge as a consequence of optimising sensory information processing.
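The scenario described above can be caricatured in a few lines: two units with independent inputs and zero initial cross-talk, where a simple Hebbian rule with weight normalization stands in for the sensitivity-optimizing learning rule of the paper (an assumption; the original model's rule and architecture differ). With one channel deprived, the cross-talk weight onto the deprived system grows away from zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two reciprocally connected sensory units; cross-talk weights start at zero
# and the two input streams are statistically independent.
eta = 0.05          # learning rate ("plasticity")
deprivation = 0.1   # scale of input to system 1 (sensory deprivation)
w12 = w21 = 0.0     # cross-talk weights, initially zero

for _ in range(5000):
    x1 = deprivation * rng.standard_normal()  # deprived channel
    x2 = rng.standard_normal()                # intact channel
    y1 = x1 + w12 * x2                        # system 1 responds to its own
    y2 = x2 + w21 * x1                        #   input plus cross-talk
    w12 += eta * y1 * x2                      # Hebbian update of cross-talk
    w21 += eta * y2 * x1
    w12 /= max(1.0, abs(w12))                 # crude normalization keeps
    w21 /= max(1.0, abs(w21))                 #   weights bounded in [-1, 1]

print("cross-talk weights:", w12, w21)
```

Because y1 feeds back into its own update, any fluctuation in w12 is amplified by the intact channel's variance, so cross-talk onto the deprived system saturates at the normalization bound even though the inputs are uncorrelated.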

    Gain control network conditions in early sensory coding

    Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing intensity of the sensory stimulus will both recruit an increasing number of neurons that receive input and change the input current that they receive. Using a mean field approximation for the network activity, we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find that, first, the main parameters that regulate network gain are the probabilities of connections from the inhibitory population to the excitatory population and of the connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean field approximation are valid in simulations of firing rate models and Hodgkin-Huxley conductance based models.
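The flavor of such a gain-control condition can be seen in a two-population linear rate model (a stand-in for the paper's mean-field equations; the parameter names and the specific condition J_EI = 1 + J_II below are illustrative of this toy model, not the paper's derived relationships). When the inhibitory-to-excitatory weight balances the inhibitory self-coupling, the excitatory steady state becomes independent of the stimulus intensity s:

```python
import numpy as np

# Linear E-I rate model: tau dE/dt = -E + J_EE*E - J_EI*I + s + b,
#                        tau dI/dt = -I + J_IE*E - J_II*I + s.
J_EE, J_IE, J_II = 0.5, 1.0, 1.0
J_EI = 1.0 + J_II          # toy gain-control condition
b = 1.0                    # stimulus-independent background drive to E

def steady_E(s, steps=20000, dt=0.001):
    E = I = 0.0
    for _ in range(steps):
        E += dt * (-E + J_EE * E - J_EI * I + s + b)
        I += dt * (-I + J_IE * E - J_II * I + s)
    return E

# Excitatory activity is the same for weak and strong stimulation:
E_weak, E_strong = steady_E(1.0), steady_E(5.0)
print(E_weak, E_strong)   # both ~ b / (1 - J_EE + J_EI*J_IE / (1 + J_II))
```

Solving the fixed-point equations by hand shows why: the s-dependent terms cancel exactly when J_EI = 1 + J_II, leaving E* set only by the background drive b.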

    Brain Performance versus Phase Transitions

    Here we illustrate how a well-founded study of the brain may originate in assuming analogies with phase-transition phenomena. Analyzing to what extent a weak signal endures in noisy environments, we identify the underlying mechanisms; what results is a description of how the excitability associated with (non-equilibrium) phase changes and criticality optimizes the processing of the signal. Our setting is a network of integrate-and-fire nodes in which connections are heterogeneous, with rapid time-varying intensities mimicking fatigue and potentiation. Emergence then becomes quite robust against modifications of the wiring topology (in fact, we considered networks ranging from fully connected to the Homo sapiens connectome), showing the essential role of synaptic flickering in computations. We also suggest how to experimentally disclose significant changes during actual brain operation. The authors acknowledge support from the Spanish Ministry of Economy and Competitiveness under project FIS2013-43201-P.
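The "rapid time-varying intensities mimicking fatigue and potentiation" can be sketched with a standard short-term-depression/facilitation synapse of the Tsodyks-Markram type (an assumption: the paper's dynamic-synapse model need not take exactly this form, and the parameter values below are illustrative). A regular presynaptic train progressively depletes the synaptic resources, so the effective efficacy flickers downward between recoveries:

```python
# Depression variable x (available resources) and facilitation variable u,
# driven by a regular 50 Hz presynaptic spike train.
dt, T = 0.1, 500.0            # time step and duration, ms
tau_rec, tau_fac = 200.0, 50.0
U = 0.2                       # baseline release probability
rate = 0.05                   # spikes per ms (50 Hz), regular train
x, u = 1.0, U
efficacies = []
t_next = 1.0 / rate
for step in range(int(T / dt)):
    t = step * dt
    x += dt * (1.0 - x) / tau_rec      # resources recover
    u += dt * (U - u) / tau_fac        # facilitation decays back to U
    if t >= t_next:                    # presynaptic spike arrives
        efficacies.append(u * x)       # effective synaptic strength
        x -= u * x                     # deplete resources
        u += U * (1.0 - u)             # facilitate
        t_next += 1.0 / rate
print(efficacies[0], efficacies[-1])
```

With a long recovery time constant, depression wins over facilitation at this rate, so the steady-state efficacy is well below the first-spike efficacy.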

    Frustrated hierarchical synchronization and emergent complexity in the human connectome network

    The spontaneous emergence of coherent behavior through synchronization plays a key role in neural function, and its anomalies often lie at the basis of pathologies. Here we employ a parsimonious (mesoscopic) approach to study analytically and computationally the synchronization (Kuramoto) dynamics on the actual human-brain connectome network. We elucidate the existence of a so-far-uncovered intermediate phase, placed between the standard synchronous and asynchronous phases, i.e., between order and disorder. This novel phase stems from the hierarchical modular organization of the connectome. Where one would expect a hierarchical synchronization process, we show that the interplay between structural bottlenecks and quenched intrinsic frequency heterogeneities at many different scales gives rise to frustrated synchronization, metastability, and chimera-like states, resulting in a very rich and complex phenomenology. We uncover the origin of the dynamic freezing behind these features by using spectral graph theory, and discuss how the emerging complex synchronization patterns relate to the need for the brain to access, in a robust though flexible way, a large variety of functional attractors and dynamical repertoires without ad hoc fine-tuning to a critical point. We acknowledge financial support from J. de Andalucía, grant P09-FQM-4682, and we thank O. Sporns for providing us access to the human connectome data.
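A minimal version of frustrated synchronization can be reproduced with Kuramoto oscillators on a toy two-module network: strong coupling within modules, weak coupling across the structural bottleneck, and module-dependent intrinsic frequencies (this two-module graph and all parameter values are illustrative stand-ins for the hierarchical connectome). Each module synchronizes internally while the network as a whole does not:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20                                  # oscillators per module
K_in, K_out = 2.0, 0.05                 # intra- / inter-module coupling
omega = np.concatenate([rng.normal(0.0, 0.1, n),
                        rng.normal(1.0, 0.1, n)])
A = np.full((2 * n, 2 * n), K_out / (2 * n))   # weak all-to-all background
A[:n, :n] = K_in / n                           # strong intra-module blocks
A[n:, n:] = K_in / n
theta = rng.uniform(0, 2 * np.pi, 2 * n)

def order(ph):
    return abs(np.exp(1j * ph).mean())  # Kuramoto order parameter

dt, r_loc, r_glob = 0.01, [], []
for step in range(8000):
    # d theta_i/dt = omega_i + sum_j A_ij sin(theta_j - theta_i)
    theta += dt * (omega + (A * np.sin(theta[None, :] - theta[:, None])).sum(1))
    if step > 4000:                     # discard the transient
        r_loc.append(0.5 * (order(theta[:n]) + order(theta[n:])))
        r_glob.append(order(theta))
print(np.mean(r_loc), np.mean(r_glob))
```

The module-level order parameter stays high while the global one is substantially lower, the signature of synchronization frustrated at the inter-module bottleneck.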

    Accurate and Fast Simulation of Channel Noise in Conductance-Based Model Neurons by Diffusion Approximation

    Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion conductances into an effective stochastic version, whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us for the first time to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here.
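The simplest instance of such a Langevin approximation is the classic two-state channel (open/closed with rates alpha and beta), where the open fraction x obeys an SDE whose noise term scales as 1/sqrt(N). This is only a cartoon of the paper's general multi-state Markov construction, and the reflection into [0, 1] is one of several possible boundary treatments:

```python
import numpy as np

rng = np.random.default_rng(2)

alpha, beta, N = 1.0, 1.0, 1000     # opening/closing rates (1/ms), channels
dt, steps = 0.01, 200000
x = alpha / (alpha + beta)          # start at the deterministic fixed point
noise = rng.standard_normal(steps)
xs = np.empty(steps)
for i in range(steps):
    drift = alpha * (1.0 - x) - beta * x
    # diffusion term: total transition rate per channel, divided by N
    diff = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
    x += drift * dt + diff * np.sqrt(dt) * noise[i]
    x = min(max(x, 0.0), 1.0)       # keep x in the physical range
    xs[i] = x
print(xs.mean(), xs.std())          # mean ~ alpha/(alpha+beta)
```

The stationary mean matches the Markov-chain value alpha/(alpha+beta) and the fluctuations shrink as the channel count N grows, which is exactly the regime where diffusion approximations pay off over microscopic simulation.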

    Fast Coding of Orientation in Primary Visual Cortex

    Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically and in a quantitative manner. Here we use a simple 'race to threshold' readout mechanism to quantify the information that the spike-time latencies of primary visual cortex (V1) cells carry about stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak tuning of spike latency can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time of the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made.
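The single-spike version of a race-to-threshold readout is easy to sketch: each model cell fires its first spike with a latency that is shortest at its preferred orientation, and the decoded orientation is the preference of the first cell to spike. The tuning shape, latency scale, and noise level below are illustrative assumptions, not fitted V1 values:

```python
import numpy as np

rng = np.random.default_rng(3)

prefs = np.arange(0.0, 180.0, 5.0)            # preferred orientations (deg)

def trial(theta):
    # Latency tuning: fast at the preferred orientation, slower away from
    # it; orientation has period 180 deg, hence the factor 2 inside cos.
    dphi = np.deg2rad(2.0 * (theta - prefs))
    latency = 20.0 + 15.0 * (1.0 - np.cos(dphi)) \
        + rng.normal(0.0, 2.0, prefs.size)    # 2 ms latency jitter
    return prefs[np.argmin(latency)]          # first spike wins the race

errors = []
for _ in range(200):
    theta = rng.uniform(0.0, 180.0)
    err = abs(trial(theta) - theta)
    errors.append(min(err, 180.0 - err))      # circular error, period 180
print(np.mean(errors))
```

Even with millisecond-scale jitter the mean decoding error stays far below the 45-degree chance level, which is why a single latency race can carry a useful fraction of the rate-code information.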

    From Spiking Neuron Models to Linear-Nonlinear Models

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
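The reverse-correlation route mentioned above can be demonstrated end to end on a synthetic LN neuron: simulate spikes from a known exponential filter plus exponential nonlinearity, then recover the filter as the spike-triggered average (STA) of Gaussian white-noise input. The ground-truth model here is an assumption for illustration; the paper instead derives the filter analytically for LIF, EIF, and Wang-Buzsáki neurons:

```python
import numpy as np

rng = np.random.default_rng(4)

T, L = 200000, 30                             # time bins, filter length
x = rng.standard_normal(T)                    # Gaussian white-noise input
k_true = np.exp(-np.arange(L) / 8.0)          # known exponential filter
k_true /= np.linalg.norm(k_true)
drive = np.convolve(x, k_true)[:T]            # linear stage
rate = 0.03 * np.exp(drive)                   # static exponential nonlinearity
spikes = rng.random(T) < np.clip(rate, 0, 1)  # Bernoulli spiking per bin

# Spike-triggered average: mean input window preceding each spike.
sta = np.zeros(L)
times = np.nonzero(spikes)[0]
times = times[times >= L]
for t in times:
    sta += x[t - L + 1:t + 1][::-1]           # index 0 = lag 0, as in k_true
sta /= times.size
corr = np.corrcoef(sta, k_true)[0, 1]
print(times.size, corr)
```

For Gaussian white noise and this nonlinearity, the STA is proportional to the true filter, so the recovered and ground-truth kernels correlate almost perfectly, which is the empirical counterpart of the paper's analytic identity.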

    Mechanisms underlying a thalamocortical transformation during active tactile sensation

    During active somatosensation, neural signals expected from movement of the sensors are suppressed in the cortex, whereas information related to touch is enhanced. This tactile suppression underlies low-noise encoding of relevant tactile features and the brain's ability to make fine tactile discriminations. Layer (L) 4 excitatory neurons in the barrel cortex, the major target of the somatosensory thalamus (VPM), respond to touch, but have low spike rates and low sensitivity to the movement of whiskers. Most neurons in VPM respond to touch and also show an increase in spike rate with whisker movement. Therefore, signals related to self-movement are suppressed in L4. Fast-spiking (FS) interneurons in L4 show similar dynamics to VPM neurons. Stimulation of halorhodopsin in FS interneurons causes a reduction in FS neuron activity and an increase in L4 excitatory neuron activity. This decrease of activity of L4 FS neurons contradicts the "paradoxical effect" predicted in networks stabilized by inhibition and in strongly-coupled networks. To explain these observations, we constructed a model of the L4 circuit, with connectivity constrained by in vitro measurements. The model explores the synaptic conductance strengths for which L4 FS neurons actively suppress baseline and movement-related activity in L4 excitatory neurons. Feedforward inhibition, in concert with recurrent intracortical circuitry, produces tactile suppression. Synaptic delays in feedforward inhibition allow transmission of temporally brief volleys of activity associated with touch. Our model provides a mechanistic explanation of a behavior-related computation implemented by the thalamocortical circuit.
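The role of the synaptic delay can be isolated in a very reduced sketch: model L4 excitatory output as thalamic drive minus feedforward inhibition that follows the same drive with a short lag (the weights, 2 ms delay, and input waveforms are illustrative assumptions; the paper's model is a full spiking circuit with recurrent connections). Sustained movement-like drive is cancelled once inhibition catches up, while a touch volley briefer than the delay passes through:

```python
import numpy as np

dt = 0.1                                  # ms
t = np.arange(0, 100, dt)
delay = int(2.0 / dt)                     # ~2 ms feedforward-inhibition lag

def l4_response(thalamic):
    # FS inhibition is a delayed, slightly stronger copy of the same drive.
    inhibition = np.concatenate([np.zeros(delay), thalamic[:-delay]])
    return np.maximum(thalamic - 1.2 * inhibition, 0.0)

movement = np.where(t > 10, 1.0, 0.0)                 # sustained drive
touch = np.where((t > 10) & (t < 11.5), 3.0, 0.0)     # 1.5 ms touch volley

move_out = l4_response(movement)
touch_out = l4_response(touch)
print(move_out[t > 20].max(), touch_out.max())
```

After the delay elapses, the sustained response is fully suppressed, but the brief volley is transmitted at full amplitude: the temporal-filtering effect of delayed feedforward inhibition in one picture.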

    Representing Where along with What Information in a Model of a Cortical Patch

    Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: A) a metric organisation of the recurrent connections, and B) a spatially localised change in the linear gain of neurons. Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation in the correct position. Importantly, we find that the amount of information that the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties, which have long been known to characterise cortical networks, naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects.
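The two ingredients A) and B) can be combined in a minimal linear-threshold sketch: distance-dependent (metric) recurrent weights on a ring, plus a spatially localised gain increase. With a uniform, nonspecific cue, activity settles where the gain is raised, illustrating how gain modulation selects the represented position (all parameters are illustrative, and this sketch omits the identity-retrieval part of the model):

```python
import numpy as np

N = 100
pos = np.arange(N)
d = np.minimum(abs(pos[:, None] - pos[None, :]),
               N - abs(pos[:, None] - pos[None, :]))   # ring distance
W = np.exp(-(d / 5.0) ** 2)                            # metric kernel
W *= 0.8 / np.abs(np.linalg.eigvals(W)).max()          # keep dynamics stable

center = 30                                            # gain-bump location
g_dist = np.minimum(abs(pos - center), N - abs(pos - center))
gain = 1.0 + 0.2 * np.exp(-(g_dist / 5.0) ** 2)        # localised gain rise

r = np.zeros(N)
cue = 0.1 * np.ones(N)                                 # uniform transient cue
for _ in range(200):
    r = gain * np.maximum(W @ r + cue, 0.0)            # gain-modulated update
peak = int(np.argmax(r))
print(peak)
```

Because the gain bump locally amplifies the recurrent feedback, the steady-state activity profile peaks at the gain centre even though the cue contains no positional information.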